106 research outputs found

    Safety Intelligence and Legal Machine Language: Do We Need the Three Laws of Robotics?

    In this chapter we will describe a legal framework for Next Generation Robots (NGRs) that has safety as its central focus. The framework is offered in response to the current lack of clarity regarding robot safety guidelines, despite the development and impending release of tens of thousands of robots into workplaces and homes around the world. We also describe…

    Smartphone-based vase design: a developing creative practice

    This article describes a developing creative practice whereby digital creative processes adapted from mobile music making are used in the data-driven design and subsequent digital instantiation of ceramic vessels. First, related work in mobile music creation and recent developments in the digital design and fabrication of ceramics frames the research and puts it in a broad context. A pilot study is then detailed, concluding that although largely successful, a number of areas of the process needed to be improved and refined. The results of a further iteration of the process, consisting of the digital creation and instantiation of location-specific vessels, are presented, before the current state of the research, in which ceramic vessels are 3D printed, is outlined. We show that mobile phones can become integral to a practical design process that allows the digital forms it creates to be instantiated using 3D printing, and that these become high-quality, end-use artefacts. The final section discusses what has been learned and contemplates how the described practice will be developed yet further.
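    The abstract does not spell out how data captured during a mobile music session becomes a vessel geometry, so the following Python sketch illustrates one plausible mapping: a list of normalised event values modulating the profile of a surface of revolution that could then be exported for 3D printing. The event list, parameter names, and mapping are assumptions made for illustration, not the authors' actual pipeline.

    ```python
    # Hypothetical sketch: map a sequence of values captured in a mobile music
    # session (e.g. normalised pitches or touch positions -- an assumption, the
    # article does not specify the mapping) to the profile of a vessel, then
    # sweep that profile around the z-axis to obtain printable geometry.
    import math

    def profile_from_events(events, height=120.0, base_radius=30.0, amplitude=15.0):
        """Map normalised event values (0..1) to (z, radius) pairs along the vessel."""
        n = len(events)
        profile = []
        for i, v in enumerate(events):
            z = height * i / (n - 1)
            # radius modulated by the event value; tapered toward base and rim
            r = base_radius + amplitude * (v - 0.5) * math.sin(math.pi * i / (n - 1))
            profile.append((z, r))
        return profile

    def revolve(profile, segments=64):
        """Sweep the (z, radius) profile around the z-axis into mesh vertices."""
        vertices = []
        for z, r in profile:
            for s in range(segments):
                a = 2 * math.pi * s / segments
                vertices.append((r * math.cos(a), r * math.sin(a), z))
        return vertices

    if __name__ == "__main__":
        events = [0.2, 0.7, 0.9, 0.4, 0.6, 0.8, 0.3]   # stand-in for captured data
        verts = revolve(profile_from_events(events))
        print(f"{len(verts)} vertices; face generation and STL export would follow")
    ```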

    IRLbot: design and performance analysis of a large-scale web crawler

    This thesis shares our experience in designing web crawlers that scale to billions of pages and models their performance. We show that with the quadratically increasing complexity of verifying URL uniqueness, breadth-first search (BFS) crawl order, and fixed per-host rate-limiting, current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly-branching spam, legitimate multi-million-page blog sites, and infinite loops created by server-side scripts. We offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In our recent experiment that lasted 41 days, IRLbot running on a single server successfully crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 Mb/s (1,789 pages/s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the web graph with 41 billion unique nodes.
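    To make the scalability argument concrete, the Python sketch below shows batched, disk-backed URL de-duplication: instead of one random lookup per discovered URL, an entire batch is checked against an on-disk key file in a single sequential scan, amortising the I/O cost over many URLs. The file layout, hash choice, and function names are illustrative assumptions only; the actual IRLbot structures are considerably more elaborate.

    ```python
    # Minimal sketch of batched, disk-backed URL de-duplication, in the spirit of
    # the scalability techniques the thesis argues for (illustrative only; not the
    # thesis's actual data structures).
    import hashlib

    def url_key(url: str) -> bytes:
        """Fixed-width hash so the on-disk 'seen' file can be scanned sequentially."""
        return hashlib.sha1(url.encode("utf-8")).digest()

    def check_and_update(batch, seen_path="seen_urls.bin"):
        """Return the URLs in `batch` not seen before, appending their keys to disk.

        The whole batch is checked in one sequential scan of the key file rather
        than by per-URL random lookups; large batches amortise the scan cost.
        """
        keys = {url_key(u): u for u in batch}
        try:
            with open(seen_path, "rb") as f:
                while chunk := f.read(20):       # 20-byte SHA-1 keys, back to back
                    keys.pop(chunk, None)        # already crawled: drop from batch
        except FileNotFoundError:
            pass                                 # first batch: nothing seen yet
        with open(seen_path, "ab") as f:
            for k in keys:
                f.write(k)                       # record the newly seen URLs
        return list(keys.values())

    if __name__ == "__main__":
        new = check_and_update(["http://example.com/", "http://example.com/about"])
        print(f"{len(new)} new URLs to enqueue")
    ```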